- 
            In this full research paper, we discuss the benefits and challenges of using GPT-4 to perform qualitative analysis to identify faculty’s mental models of assessment. Assessments play an important role in engineering education: they are used to evaluate student learning, measure progress, and identify areas for improvement. However, how faculty members approach assessment can vary based on several factors, including their own mental models of assessment. To understand the variation in these mental models, we conducted interviews with faculty members in various engineering disciplines at universities across the United States. Data were collected from 28 participants at 18 different universities. The interviews consisted of questions designed to elicit the components of mental models (state, form, function, and purpose) of the assessment of students in their classrooms. For this paper, we analyzed the interviews to identify entities and entity relationships in participant statements, using natural language processing with GPT-4 as our language model. We asked GPT-4, via instructional prompts, to extract entities and their relationships from interview excerpts, and then created graphical representations with GraphViz to characterize and compare individuals’ mental models of assessment. We then compared GPT-4’s results on a small portion of our data against entities and relationships extracted manually by one of our researchers. We found that both methods identified overlapping entity relationships, but each also discovered entities and relationships the other missed. GPT-4 tended to identify more basic relationships, while manual analysis identified more nuanced ones. Our results do not currently support using GPT-4 to automatically generate graphical representations of faculty’s mental models of assessment. However, a human-in-the-loop process could help offset GPT-4’s limitations. In this paper, we also discuss plans for future work to improve on GPT-4’s current performance.
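The pipeline described above (extract entity-relationship triples from interview excerpts, then render them with GraphViz) can be illustrated with a minimal sketch. The triples and function name below are hypothetical, not the authors' actual prompts, code, or study data; the sketch only shows how extracted triples might be turned into GraphViz DOT source.

```python
# Hypothetical sketch of the triples-to-graph step: the (subject, relation,
# object) triples here are illustrative stand-ins for what a model such as
# GPT-4 might return from an interview excerpt, not actual study data.

def triples_to_dot(triples):
    """Render entity-relationship triples as GraphViz DOT digraph source."""
    lines = ["digraph mental_model {"]
    for subj, rel, obj in triples:
        # One directed, labeled edge per extracted relationship.
        lines.append(f'    "{subj}" -> "{obj}" [label="{rel}"];')
    lines.append("}")
    return "\n".join(lines)

# Illustrative triples for a statement like "Exams measure student learning":
triples = [
    ("exams", "measure", "student learning"),
    ("assessment", "identifies", "areas for improvement"),
]
print(triples_to_dot(triples))
```

The resulting DOT text can be rendered with the `dot` command-line tool (e.g. `dot -Tpng model.dot -o model.png`) to produce the kind of graphical comparison the abstract describes.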
- 
            The emergence of generative artificial intelligence (GAI) has prompted a fundamental reexamination of established teaching methods. GAI systems offer both educators and students a chance to reevaluate their academic practices. Such reevaluation is particularly pertinent to assessment in engineering instruction, where advanced generative text models are proficient at addressing intricate problems like those found in engineering courses. While this juncture presents a moment to revisit assessment methods in general, how faculty are actually responding to the incorporation of GAI in their evaluative techniques remains unclear. To investigate this, we initiated a study of the mental models engineering faculty hold about evaluation, focusing on their evolving attitudes and responses to GAI as reported in Fall 2023. Adopting a long-term data-gathering strategy, we conducted a series of surveys, interviews, and recordings targeting the evaluative decision-making processes of a varied group of engineering educators across the United States. This paper presents the data collection process, our participants’ demographics, our data analysis plan, and initial findings based on participants’ backgrounds, followed by our future work and potential implications. In the next step of our study, we will analyze the collected data using qualitative thematic analysis. Once the study is complete, we believe our findings will sketch the early stages of this emerging paradigm shift in the assessment of undergraduate engineering education, offering a novel perspective on the discourse surrounding evaluation strategies in the field. These insights are vital for stakeholders such as policymakers, educational leaders, and instructors, as they have significant ramifications for policy development, curriculum planning, and the broader dialogue on integrating GAI into educational evaluation.